
    A Bayesian Collocation Integral Method for Parameter Estimation in Ordinary Differential Equations

    Inferring the parameters of ordinary differential equations (ODEs) from noisy observations is an important problem in many scientific fields. Currently, most parameter estimation methods that bypass numerical integration tend to rely on basis functions or Gaussian processes to approximate the ODE solution and its derivatives. Due to the sensitivity of the ODE solution to its derivatives, these methods can be hindered by estimation error, especially when only sparse time-course observations are available. We present a Bayesian collocation framework that operates on the integrated form of the ODEs while also avoiding expensive numerical solvers. Our methodology can handle general nonlinear ODE systems. We demonstrate the accuracy of the proposed method through a simulation study in which the estimated parameters and recovered system trajectories are compared with those of other recent methods. A real data example is also provided.
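    To make the "integrated form" idea concrete, the following is a minimal, non-Bayesian sketch: the observed trajectory is smoothed once, and parameters are chosen so the smoothed trajectory matches the running integral of the right-hand side, so no derivative of the smoother is ever taken and no ODE solver is called. The logistic model, the spline smoother, and all names below are illustrative assumptions, not the paper's actual Bayesian collocation method.

```python
# Hypothetical sketch of collocation on the integrated form of an ODE.
# Least-squares illustration only; the paper's method is Bayesian and the
# names/values here are invented for the example.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import least_squares

def logistic_rhs(x, theta):
    r, K = theta
    return r * x * (1.0 - x / K)

# noisy, sparse observations of a logistic trajectory (r = 0.8, K = 10)
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 10.0, 15)
x_true = 10.0 / (1.0 + 9.0 * np.exp(-0.8 * t_obs))
y_obs = x_true + rng.normal(scale=0.2, size=t_obs.size)

# smooth the observations once; the smoother itself is never differentiated
spline = UnivariateSpline(t_obs, y_obs, s=0.5)
t_col = np.linspace(t_obs[0], t_obs[-1], 100)   # collocation grid
x_col = spline(t_col)

def integral_residuals(theta):
    # cumulative trapezoidal integral of f(x(t), theta) along the grid
    f_vals = logistic_rhs(x_col, theta)
    cum_int = np.concatenate(([0.0], np.cumsum(
        0.5 * (f_vals[1:] + f_vals[:-1]) * np.diff(t_col))))
    # integrated-form residual: x(t) - x(t0) - integral of f from t0 to t
    return (x_col - x_col[0]) - cum_int

theta_hat = least_squares(integral_residuals, x0=[0.5, 8.0]).x
print("estimated (r, K):", theta_hat)
```

    A Bayesian version would place priors on the parameters and fold the collocation residuals into a likelihood, but the least-squares form above already shows why the integrated form sidesteps derivative-estimation error.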

    Determine OWA operator weights using kernel density estimation

    Some subjective methods need to divide input values into local clusters before determining the ordered weighted averaging (OWA) operator weights from the data distribution characteristics of the input values. However, clustering the input values is a complex process. In this paper, a novel probability density based OWA (PDOWA) operator is put forward based on the data distribution characteristics of the input values. To capture the local cluster structure of the input values, kernel density estimation (KDE) is used to estimate the probability density function (PDF) that fits the input values. The derived PDF contains the density information of the input values, which reflects their importance: input values with high probability densities (PDs) should be assigned large weights, while those with low PDs should be assigned small weights. The desirable properties of the proposed PDOWA operator are then investigated. Finally, the proposed PDOWA operator is applied to a multicriteria decision making problem concerning the evaluation of smart phones and is compared with several existing OWA operators. The comparative analysis shows that the proposed PDOWA operator is simpler and more efficient than the existing operators.
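    A minimal sketch of the density-based weighting idea, under one straightforward reading of the abstract: estimate a kernel density over the input values, evaluate it at each ordered argument, and normalise the densities into OWA weights so that values lying in dense local clusters receive larger weights. The exact PDOWA construction may differ; `pdowa_aggregate` and the sample scores are illustrative only.

```python
# Density-weighted OWA aggregation sketch (illustrative, not the paper's
# exact PDOWA definition).
import numpy as np
from scipy.stats import gaussian_kde

def pdowa_aggregate(values):
    values = np.asarray(values, dtype=float)
    # OWA operates on the arguments sorted in descending order
    ordered = np.sort(values)[::-1]
    # kernel density estimate of the input-value distribution
    kde = gaussian_kde(values)
    densities = kde(ordered)
    weights = densities / densities.sum()      # normalise to sum to 1
    return float(np.dot(weights, ordered)), weights

score, w = pdowa_aggregate([7.8, 8.1, 8.0, 3.2, 7.9, 6.5])
print("aggregated score:", round(score, 3))
print("weights:", np.round(w, 3))
```

    With this weighting, the outlying value 3.2 sits in a low-density region and automatically receives a small weight, which is the clustering effect the abstract describes without an explicit clustering step.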

    Confucius Queue Management: Be Fair But Not Too Fast

    When many users and unique applications share a congested edge link (e.g., a home network), everyone wants their own application to continue to perform well despite contention over network resources. Traditionally, network engineers have focused on fairness as the key objective, ensuring that competing applications are served equitably by the switch, and hence have deployed fair queueing mechanisms. However, for many network workloads today, strict fairness is directly at odds with equitable application performance. Real-time streaming applications, such as videoconferencing, suffer the most when network performance is volatile (with delay spikes or sudden and dramatic drops in throughput). Unfortunately, "fair" queueing mechanisms lead to extremely volatile network behavior in the presence of bursty, multi-flow applications such as Web traffic. When a sudden burst of new data arrives, fair queueing algorithms rapidly shift resources away from incumbent flows, leading to severe stalls in real-time applications. In this paper, we present Confucius, the first practical queue management scheme to effectively balance fairness against volatility, providing performance outcomes that benefit all applications sharing the contended link. Confucius outperforms realistic queueing schemes, protecting real-time streaming flows from stalls when competing with more than 95% of websites. Importantly, Confucius does not assume the collaboration of end-hosts, nor does it require manual parameter tuning to achieve good performance.
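    The tension the abstract describes can be seen in a toy bandwidth model: strict fair queueing snaps an incumbent flow to the instantaneous fair share the moment a burst of new flows arrives, whereas a "fair but not too fast" policy lets the incumbent's share converge gradually. This is only an illustration of the stated trade-off, not Confucius's actual algorithm; the smoothing factor below is an arbitrary assumption.

```python
# Toy comparison of instantaneous fair sharing vs. gradual convergence for
# one incumbent flow on a shared link (illustrative only).
def fair_share(link_capacity, n_flows):
    return link_capacity / n_flows

def simulate_incumbent_share(n_flows_over_time, link_capacity=100.0, alpha=0.2):
    """Return the incumbent flow's allocation per step under both policies."""
    strict, gradual = [], []
    current = link_capacity                           # incumbent starts alone
    for n in n_flows_over_time:
        target = fair_share(link_capacity, n)
        strict.append(target)                         # jump immediately
        current += alpha * (target - current)         # converge gradually
        gradual.append(current)
    return strict, gradual

# a burst of 9 short web flows joins at step 5 and leaves at step 10
flows = [1] * 5 + [10] * 5 + [1] * 5
strict, gradual = simulate_incumbent_share(flows)
print("strict fair share  :", [round(x, 1) for x in strict])
print("gradual convergence:", [round(x, 1) for x in gradual])
```

    The strict policy drops the incumbent from 100 to 10 Mbps in a single step, the kind of sudden throughput collapse that stalls a real-time stream, while the gradual policy trades a brief period of unfairness for a smoother transition.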

    Aggregation Weighting of Federated Learning via Generalization Bound Estimation

    Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions. However, this naive weighting method may lead to unfairness and degraded model performance due to statistical heterogeneity and the inclusion of noisy data among clients. Theoretically, distributional robustness analysis has shown that the generalization performance of a learning model with respect to any shifted distribution is bounded. This motivates us to reconsider the weighting approach in federated learning. In this paper, we replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model. Specifically, we estimate the upper and lower bounds of the second-order origin moment of the shifted distribution for the current local model, and then use the disagreement between these bounds as the aggregation proportion for weighting in each communication round. Experiments demonstrate that the proposed weighting strategy significantly improves the performance of several representative FL algorithms on benchmark datasets.
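    A minimal sketch of the aggregation step as described, assuming each client reports upper and lower bound estimates of the second-order origin moment and the server turns the gap between them into aggregation proportions. How those bounds are actually estimated is not reproduced here; `bound_gaps` is a placeholder input and the helper name is invented for illustration.

```python
# Bound-gap weighted parameter aggregation (illustrative sketch).
import numpy as np

def aggregate_by_bound_gap(client_params, bound_gaps):
    """Weighted average of client parameter vectors.

    client_params: list of 1-D numpy arrays (flattened model parameters)
    bound_gaps:    list of floats, upper bound minus lower bound per client
    """
    gaps = np.asarray(bound_gaps, dtype=float)
    weights = gaps / gaps.sum()                      # aggregation proportions
    stacked = np.stack(client_params)                # (n_clients, n_params)
    return weights @ stacked                         # weighted parameter sum

clients = [np.array([0.9, 1.1]), np.array([1.4, 0.6]), np.array([1.0, 1.0])]
gaps = [0.05, 0.30, 0.10]                            # hypothetical bound gaps
print(aggregate_by_bound_gap(clients, gaps))
```

    The only structural change from standard FedAvg is the source of the weights: bound disagreement replaces the sample-count proportion.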

    Improving Multicast Stability in Mobile Multicast Scheme using Motion Prediction

    Stability is an important issue in multicast, especially in mobile environments where joining and leaving occur much more frequently. In this paper, we propose a scheme that improves multicast stability through motion prediction. Before entering a new network, the mobile node (MN) predicts its staying time; if the time is long enough, it asks the new network to join the multicast tree as usual. Otherwise, the new network creates a tunnel to the MN's multicast agent to receive multicast packets. Because networks usually have different power ranges, the staying time is not predicted directly; the Average Staying Time is used instead. The prediction algorithm is effective yet practical, requiring little calculation time and memory. Simulation results show that the proposed scheme improves the stability of the multicast tree remarkably while incurring a much smaller cost.
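    The join-or-tunnel decision described above can be sketched as follows, assuming the mobile node keeps an Average Staying Time per neighbouring network and compares it against a threshold; the threshold value, field names, and prediction details are illustrative assumptions rather than the paper's parameters.

```python
# Handover decision sketch: join the multicast tree only when the predicted
# stay is long enough, otherwise tunnel from the multicast agent.
from dataclasses import dataclass

@dataclass
class NetworkInfo:
    name: str
    average_staying_time: float   # seconds, maintained from past visits

JOIN_THRESHOLD = 30.0             # hypothetical minimum worthwhile stay

def handover_action(next_network: NetworkInfo) -> str:
    if next_network.average_staying_time >= JOIN_THRESHOLD:
        return f"join multicast tree in {next_network.name}"
    return f"tunnel from multicast agent while in {next_network.name}"

print(handover_action(NetworkInfo("cell-A", 120.0)))  # long stay -> join
print(handover_action(NetworkInfo("cell-B", 8.0)))    # short stay -> tunnel
```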

    Water refilling along vessels at initial stage of willow cuttage revealed by move contrast CT

    Cuttage is a widely used technique for plant propagation, whose success relies on refilling to restore water transport. However, the requirements for characterizing refilling, namely large penetration depth, fast temporal resolution, and high spatial resolution, cannot be met simultaneously by conventional imaging techniques. So far, the dynamic process of water refilling along the vessels at the initial stage of cuttage, as well as its characteristics, has remained unclear. Here, we developed a move contrast X-ray microtomography method that achieves 3D dynamic non-destructive imaging of water refilling at the initial stage of willow branch cuttage, without the aid of any contrast agent. Experimental results indicate three primary refilling modalities in vessels: 1) the osmosis type, manifested mainly as osmosis from the surrounding tissue through the vessel wall into the cavity; 2) the linear type, which appears once permeation has progressed far enough that a complete liquid column forms in the vessel; and 3) a mixed osmosis-linear type as an intermediate state. Further analysis also reveals a “temporal-spatial relay” mode of refilling between adjacent vessels. Since vessel length is quite limited, cavitation and the relay refilling mode of vessels may be an important means of achieving long-distance water transport.

    FERN: Leveraging Graph Attention Networks for Failure Evaluation and Robust Network Design

    Robust network design, which aims to guarantee network availability under various failure scenarios while optimizing performance/cost objectives, has received significant attention. Existing approaches often rely on model-based mixed-integer optimization that is hard to scale, or employ deep learning to solve specific engineering problems with limited generalizability. In this paper, we show that failure evaluation provides a common kernel for improving the tractability and scalability of existing solutions. By providing a neural-network approximation of this common kernel using graph attention networks, we develop a unified learning-based framework, FERN, for scalable Failure Evaluation and Robust Network design. FERN represents rich problem inputs as a graph and captures both local and global views by attentively performing feature extraction from the graph. It enables a broad range of robust network design problems, including the robust network validation, network upgrade optimization, and fault-tolerant traffic engineering problems discussed in this paper, to be recast with respect to the common kernel and thus computed efficiently using neural networks over a small set of critical failure scenarios. Extensive experiments on real-world network topologies show that FERN can efficiently and accurately identify key failure scenarios for both OSPF and an optimal routing scheme, and generalizes well to different topologies and input traffic patterns. It can speed up these robust network design problems by more than 80x, 200x, and 10x, respectively, with a negligible performance gap.
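    As a rough illustration of the building block named in the abstract, the following is a toy single-head graph attention layer over a small topology graph: per-edge attention coefficients weight neighbour features when computing node embeddings, which a downstream head could then score for failure criticality. This is a generic GAT sketch, not FERN's architecture; the shapes, random features, and the scoring step are all assumptions.

```python
# Toy single-head graph attention layer over a 5-node topology (illustrative).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer(node_feats, adj, W, a, leaky_slope=0.2):
    """node_feats: (N, F), adj: (N, N) 0/1, W: (F, H), a: (2*H,)"""
    h = node_feats @ W                                  # projected features
    out = np.zeros_like(h)
    for i in range(h.shape[0]):
        neigh = np.where(adj[i] > 0)[0]
        # attention logits e_ij = LeakyReLU(a^T [h_i || h_j])
        logits = np.array([np.concatenate([h[i], h[j]]) @ a for j in neigh])
        logits = np.where(logits > 0, logits, leaky_slope * logits)
        alpha = softmax(logits)                         # normalise over neighbours
        out[i] = (alpha[:, None] * h[neigh]).sum(axis=0)
    return out

rng = np.random.default_rng(1)
N, F, H = 5, 3, 4                                       # 5-node toy topology
feats = rng.normal(size=(N, F))
adj = np.array([[0,1,1,0,0],[1,0,1,1,0],[1,1,0,1,1],[0,1,1,0,1],[0,0,1,1,0]])
embeddings = gat_layer(feats, adj, rng.normal(size=(F, H)), rng.normal(size=2*H))
print(embeddings.shape)   # (5, 4) node embeddings for downstream scoring
```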